⚡ Vectorized Execution
Query Processing, SIMD, Columnar Storage, Batch Processing
Scoured 20,712 posts in 264.4 ms
TimelyFreeze: Adaptive Parameter Freezing Mechanism for Pipeline Parallelism · arxiv.org · 1d · 🚀 Async Optimization
feldera/feldera: The Feldera Incremental Computation Engine · github.com · 8h · ⚡ DataFusion
How I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS · mohammedeabdelaziz.github.io · 3h · Discuss: Hacker News · 🏗️ LLM Infrastructure
Open source USearch library jumpstarts ScyllaDB vector search · thenewstack.io · 1d · 🎨 ChromaDB
The Top 10 Best Practices for AI/BI Dashboards Performance Optimization (Part 2) · databricks.com · 2d · ⚡ SQL Optimization
Retro PC breakthrough: NVMe SSD running on Pentium III via PCIe slot adapter · generationamiga.com · 1h · ⚙️ Mechanical Sympathy
Taming the Regex Monster: Optimizing Massive Literal Alternations · modern-c.blogspot.com · 1d · Discuss: r/golang · 🔍 RegEx Engines
Why Real-Time Execution Is Now Expected in Lakehouse Architectures · singlestore.com · 1d · 📦 In-process Databases
ggml: backend-agnostic tensor parallelism by JohannesGaessler · Pull Request #19378 · github.com · 1d · Discuss: r/LocalLLaMA · ⚡ Hardware Acceleration
Speeding Up HTML Generation by 2000% · bobrubbens.nl · 1d · 💾 Prompt Caching
Modern Trends In Floating-Point · semiengineering.com · 2d · ⚡ Hardware Acceleration
Deterministic Retrieval at Scale: Optimal-Space LCP Indexing and 308x Energy Reduction on Modern GPUs · arxiv.org · 1d · 🗂️ Vector Indexes
I Built a 6 BIPS JIT in Five Months · unlikelyemphasis.substack.com · 1d · Discuss: Substack · ⚙️ Language Runtimes
From Questions to Insights: Data Analysis with LangChain’s Built-In Tools · pub.towardsai.net · 2d · 🏗️ LLM Infrastructure
How we cut Vertex AI latency by 35% with GKE Inference Gateway · cloud.google.com · 1d · 🧠 Inference Serving
Fast Autoscheduling for Sparse ML Frameworks · ajroot.pl · 2d · Discuss: Hacker News · 🕯️ Candle
Proposal: A Framework for Discovering Alien Physics via Optimal Compression · lesswrong.com · 23h · 🎯 Vector Quantization
Hamming Distance for Hybrid Search in SQLite · notnotp.com · 2d · 💾 SQLite
How can computing for AI and other demands be more energy efficient? · techxplore.com · 10m · 🖥 GPUs
Optimized LLM Inference Engines · rishirajacharya.com · 3d · 🏗️ LLM Infrastructure